
    Multiobjective optimization and decision making in engineering sciences

    Real-world decision making problems in various fields, including the engineering sciences, are becoming ever more challenging to address. The need to consider competing criteria related to, for example, business, technical, workforce, safety and environmental aspects increases the complexity of decision making and leads to problems featuring multiple competing criteria. A key challenge in such problems is identifying the most preferred trade-off solution(s) with respect to those criteria. The effective combination of data, skills, and advanced engineering and management technologies is therefore becoming a key asset for a company, prompting a rethink of how modern decision making problems are tackled. This special issue focuses on the intersection between engineering, multiple criteria decision making, multiobjective optimization, and data science. The development of new models and algorithmic methods for solving such problems is as much in focus as the application of these concepts to real problems. The special issue was motivated by the 25th International Conference on Multiple Criteria Decision Making (MCDM2019), held in Istanbul, Turkey, in 2019.

    Bioprocess economics and optimization of continuous and pre-packed disposable chromatography

    The biotech sector is facing increasing pressure to design more cost-efficient, robust and flexible manufacturing processes. Standard batch chromatography (BATCH) is an established but expensive approach to separating impurities associated with both E. coli and mammalian cell expression systems. This study uses a computational framework to investigate whether continuous chromatography (CONTI) and disposable technologies can provide a competitive alternative to BATCH and reusable equipment. A set of general assumptions is presented on how key downstream processing characteristics, such as chromatography operating conditions, resin properties and equipment requirements, vary as a function of the chromatography mode adopted, BATCH vs CONTI, and the column type used, self-packed glass (SP GLASS) vs pre-packed disposable (PP DISPO). These assumptions are then used within the framework, which comprises a detailed process economics model, to explore switching points between the two chromatography modes and column types for different upstream configurations and resin properties, focusing on a single chromatography step. An evolutionary optimization algorithm is then linked to the framework to optimize the setup of an entire antibody purification train consisting of multiple chromatography steps. Alongside the chromatography mode and column type, the framework also optimizes critical decisions relating to the chromatography sequence, equipment sizing strategy and the operating conditions adopted for each chromatography step, subject to multiple demand- and process-related (resin requirement) constraints. The framework is validated for different production scales, including early phase, phase III, and commercial scale. To facilitate decision making, methods are provided for visualizing the switching points and trade-offs exhibited by the optimal purification processes identified by the framework.
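
    As a loose illustration of the switching-point idea, and not the study's actual process economics model, the sketch below (Python) compares a hypothetical cost of goods per gram for BATCH and CONTI as a function of annual demand; all cost figures, the linear cost structure and the demand range are invented assumptions.

        # Hypothetical cost-of-goods comparison between batch (BATCH) and
        # continuous (CONTI) chromatography; all figures are invented for illustration.
        def cost_per_gram(mode, annual_demand_kg):
            """Toy model: fixed equipment/facility cost amortised over annual output
            plus a per-gram resin/consumables cost that differs by mode."""
            if mode == "BATCH":
                fixed_cost, variable_cost = 2.0e6, 1.5    # assumed $/year and $/g
            elif mode == "CONTI":
                fixed_cost, variable_cost = 2.6e6, 0.9    # assumed $/year and $/g
            else:
                raise ValueError(f"unknown mode: {mode}")
            grams = annual_demand_kg * 1e3
            return fixed_cost / grams + variable_cost

        def switching_point(demands_kg):
            """Return the first annual demand at which CONTI becomes cheaper."""
            for d in demands_kg:
                if cost_per_gram("CONTI", d) < cost_per_gram("BATCH", d):
                    return d
            return None

        print("CONTI overtakes BATCH at roughly",
              switching_point(range(50, 2001, 50)), "kg/year")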

    Safe Learning and Optimization Techniques: Towards a Survey of the State of the Art

    Safe learning and optimization deals with learning and optimization problems that avoid, as much as possible, the evaluation of non-safe input points: solutions, policies, or strategies that cause an irrecoverable loss (e.g., breakage of a machine or equipment, or a threat to life). Although a comprehensive survey of safe reinforcement learning algorithms was published in 2015, a number of new algorithms have been proposed since, and related work in active learning and in optimization was not considered. This paper reviews such algorithms from a number of domains, including reinforcement learning, Gaussian process regression and classification, evolutionary algorithms, and active learning. We provide the fundamental concepts on which the reviewed algorithms are based and a characterization of the individual algorithms. We conclude by explaining how the algorithms are connected and by offering suggestions for future research. Comment: The final authenticated publication appears in: Heintz F., Milano M., O'Sullivan B. (eds) Trustworthy AI - Integrating Learning, Optimization and Reasoning. TAILOR 2020. Lecture Notes in Computer Science, vol 12641. Springer, Cham, and is available online at https://doi.org/10.1007/978-3-030-73959-1_12.
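
    As a rough sketch of the idea common to many of the reviewed methods, rather than any specific algorithm from the survey, the snippet below evaluates a candidate only when a Gaussian-process lower confidence bound on its safety value clears a threshold; the objective, the safety function, the threshold and the confidence parameter are all invented for illustration.

        # Minimal illustration of safe optimisation: evaluate a candidate only if a
        # Gaussian-process lower confidence bound on its safety value clears a
        # threshold. Objective, safety function and threshold are all assumptions.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor

        def objective(x):                  # assumed black-box objective (maximise)
            return -(x - 0.6) ** 2

        def safety(x):                     # assumed safety metric, must stay above threshold
            return 1.0 - 2.0 * abs(x - 0.5)

        SAFE_THRESHOLD, BETA = 0.2, 2.0
        X_obs, y_safety = [np.array([[0.5]])], [safety(0.5)]   # known safe seed point
        candidates = np.linspace(0.0, 1.0, 101).reshape(-1, 1)
        gp = GaussianProcessRegressor()

        for _ in range(20):
            gp.fit(np.vstack(X_obs), y_safety)
            mu, sigma = gp.predict(candidates, return_std=True)
            safe_mask = (mu - BETA * sigma) >= SAFE_THRESHOLD  # pessimistic safety estimate
            if not safe_mask.any():
                break                                          # nothing is provably safe
            safe_pts = candidates[safe_mask]
            scores = [objective(x[0]) for x in safe_pts]       # best objective among safe points
            x_next = safe_pts[int(np.argmax(scores))]
            X_obs.append(x_next.reshape(1, -1))
            y_safety.append(safety(x_next[0]))

        evaluated = np.vstack(X_obs).ravel()
        print("best safely evaluated point:", max(evaluated, key=objective))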

    Model-agnostic variable importance for predictive uncertainty: an entropy-based approach

    In order to trust the predictions of a machine learning algorithm, it is necessary to understand the factors that contribute to those predictions. In the case of probabilistic and uncertainty-aware models, it is necessary to understand not only the reasons for the predictions themselves, but also the model's level of confidence in those predictions. In this paper, we show how existing methods in explainability can be extended to uncertainty-aware models and how such extensions can be used to understand the sources of uncertainty in a model's predictive distribution. In particular, by adapting permutation feature importance, partial dependence plots, and individual conditional expectation plots, we demonstrate that novel insights into model behaviour may be obtained and that these methods can be used to measure the impact of features on both the entropy of the predictive distribution and the log-likelihood of the ground truth labels under that distribution. With experiments using both synthetic and real-world data, we demonstrate the utility of these approaches in understanding both the sources of uncertainty and their impact on model performance.
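
    A minimal sketch of the entropy-based permutation idea described above (a paraphrase of the approach, not the authors' code): permute one feature at a time and record how much the mean entropy of the model's predictive distribution changes; the classifier and the synthetic data are placeholders.

        # Permutation-style importance for predictive uncertainty: shuffle one
        # feature at a time and record the change in mean predictive entropy.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier

        X, y = make_classification(n_samples=500, n_features=6, random_state=0)
        model = RandomForestClassifier(random_state=0).fit(X, y)

        def mean_predictive_entropy(model, X):
            proba = model.predict_proba(X)
            return float((-(proba * np.log(proba + 1e-12)).sum(axis=1)).mean())

        baseline = mean_predictive_entropy(model, X)
        rng = np.random.default_rng(0)
        for j in range(X.shape[1]):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])      # break the link between feature j and the target
            delta = mean_predictive_entropy(model, X_perm) - baseline
            print(f"feature {j}: change in mean predictive entropy = {delta:+.3f}")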

    Towards Developing a Virtual Guitar Instructor through Biometrics Informed Human-Computer Interaction

    Within the last few years, wearable sensor technologies have given us access to novel biometrics that allow musical gesture to be connected to computing systems. Doing so lets us study how we perform musically and understand the process at the data level. However, biometric information is complex and cannot be mapped directly to digital systems. In this work, we study how guitar performance techniques can be captured and analysed towards developing an AI that can provide real-time feedback to guitar students. We do this by having guitarists perform musical exercises whilst acquiring and processing biometric (plus audiovisual) information during their performance. Our results show that there are notable differences in the biometrics when playing a guitar scale in two different ways (legato and staccato), an outcome that motivates our intention to build an AI guitar tutor.
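
    Purely as an illustrative sketch of a data-level comparison between playing styles (the feature, the values and the test are invented placeholders, not the study's analysis): compare one summary biometric feature between legato and staccato takes with a simple significance test.

        # Toy comparison of one summary biometric feature (e.g. mean muscle-activation
        # amplitude per take) between legato and staccato performances of a scale.
        import numpy as np
        from scipy.stats import ttest_ind

        rng = np.random.default_rng(0)
        legato = rng.normal(loc=0.42, scale=0.05, size=20)     # placeholder per-take values
        staccato = rng.normal(loc=0.55, scale=0.05, size=20)   # placeholder per-take values

        t_stat, p_value = ttest_ind(legato, staccato)
        print(f"t = {t_stat:.2f}, p = {p_value:.4f}")          # small p: styles differ on this feature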

    A Study of Scalarisation Techniques for Multi-Objective QUBO Solving

    In recent years, there has been significant research interest in solving Quadratic Unconstrained Binary Optimisation (QUBO) problems. Physics-inspired optimisation algorithms have been proposed for deriving optimal or sub-optimal solutions to QUBOs. These methods are particularly attractive in the context of specialised hardware, such as quantum computers, application-specific CMOS and other high-performance computing resources, for solving optimisation problems. Such solvers are applied to QUBO formulations of combinatorial optimisation problems. Quantum and quantum-inspired optimisation algorithms have shown promising performance when applied to academic benchmarks as well as real-world problems. However, QUBO solvers are single-objective solvers. To make them more efficient at solving problems with multiple objectives, a decision needs to be made on how to convert such multi-objective problems to single-objective ones. In this study, we compare methods of deriving scalarisation weights when combining the two objectives of the cardinality-constrained mean-variance portfolio optimisation problem into one. We show a significant performance improvement (measured in terms of hypervolume) when using a method that iteratively fills the largest space in the Pareto front, compared to a naïve approach using uniformly generated weights.
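
    As a rough illustration of adaptive weight generation (a generic dichotomic scheme in the same spirit, not necessarily the exact method evaluated in the paper): instead of uniform weights, each new scalarisation weight bisects the pair of adjacent weights whose solutions are currently farthest apart in objective space; the scalarised solver and the toy front below are assumptions standing in for a QUBO solver.

        # Adaptive scalarisation for two objectives: each new weight bisects the pair
        # of weights whose solutions are currently farthest apart in objective space.
        # solve_scalarised stands in for a single-objective (e.g. QUBO) solver.
        import numpy as np

        def solve_scalarised(w):
            """Minimise w*f1 + (1-w)*f2 on a toy front f1 = t^2, f2 = (1-t)^2."""
            t = 1.0 - w                        # closed-form optimum of the toy problem
            return (t ** 2, (1.0 - t) ** 2)

        def adaptive_weights(n_solutions):
            front = {0.0: solve_scalarised(0.0), 1.0: solve_scalarised(1.0)}
            while len(front) < n_solutions:
                ws = sorted(front)
                gaps = [np.linalg.norm(np.subtract(front[a], front[b]))
                        for a, b in zip(ws, ws[1:])]       # gaps between neighbouring solutions
                i = int(np.argmax(gaps))
                w_new = 0.5 * (ws[i] + ws[i + 1])          # target the largest gap
                front[w_new] = solve_scalarised(w_new)
            return front

        for w, (f1, f2) in sorted(adaptive_weights(9).items()):
            print(f"w = {w:.3f} -> f1 = {f1:.3f}, f2 = {f2:.3f}")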

    A Data-Driven Framework for Identifying Investment Opportunities in Private Equity

    The core activity of a Private Equity (PE) firm is to invest in companies in order to provide its investors with profit, usually within 4-7 years. The decision whether or not to invest in a company is typically made manually by examining various performance indicators of the company and is often based on instinct. This process becomes unmanageable given the large number of companies in which one could potentially invest. Moreover, as more data about company performance indicators becomes available and the number of different indicators one may want to consider increases, manual crawling and assessment of investment opportunities becomes inefficient and ultimately impossible. To address these issues, this paper proposes a framework for automated, data-driven screening of investment opportunities and thus the recommendation of businesses to invest in. The framework draws on data from several sources to assess the financial and managerial position of a company, and then uses an explainable artificial intelligence (XAI) engine to suggest investment recommendations. The robustness of the model is validated using different AI algorithms, class imbalance-handling methods, and features extracted from the available data sources.
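
    A highly simplified sketch of this kind of pipeline (the data, features, model choice and imbalance handling are illustrative assumptions, not the paper's setup): train a classifier with class-imbalance handling and report a model-agnostic feature-importance explanation.

        # Toy investment-screening pipeline: imbalance-aware classifier plus a
        # model-agnostic explanation of which indicators drive the recommendation.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.inspection import permutation_importance
        from sklearn.model_selection import train_test_split

        # Placeholder data: rows = companies, columns = performance indicators,
        # label = 1 for the (rare) companies that proved to be good investments.
        X, y = make_classification(n_samples=2000, n_features=8, weights=[0.9, 0.1],
                                   random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

        clf = RandomForestClassifier(class_weight="balanced", random_state=0)
        clf.fit(X_tr, y_tr)

        # Rank indicators by how much shuffling each one hurts held-out performance.
        imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
        for j in np.argsort(imp.importances_mean)[::-1]:
            print(f"indicator {j}: importance = {imp.importances_mean[j]:.3f}")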